Strong Scaling of Matrix Multiplication Algorithms and Memory-Independent Communication Lower Bounds
A parallel algorithm has perfect strong scaling if its running time on P
processors is linear in 1/P, including all communication costs.
Distributed-memory parallel algorithms for matrix multiplication with perfect
strong scaling have only recently been found. One is based on classical matrix
multiplication (Solomonik and Demmel, 2011), and one is based on Strassen's
fast matrix multiplication (Ballard, Demmel, Holtz, Lipshitz, and Schwartz,
2012). Both algorithms scale perfectly, but only up to a certain number of
processors, beyond which the cost of inter-processor communication no
longer scales.
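In the usual cost model (notation ours, not the abstract's), perfect strong
scaling for classical matrix multiplication amounts to every term of the
running time decreasing linearly in P:

```latex
% n = matrix dimension, P = processors, M = words of local memory,
% \gamma = time per flop, \beta = time per word communicated.
\[
  T(n,P) \;=\; \gamma \cdot \Theta\!\left(\frac{n^{3}}{P}\right)
        \;+\; \beta \cdot O\!\left(\frac{n^{3}}{P\sqrt{M}}\right)
\]
% For fixed n and M, both terms are proportional to 1/P, which is the
% perfect strong scaling property.
```

The communication term here is the memory-dependent bandwidth lower bound of
Irony, Toledo, and Tiskin, which the two algorithms above attain.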
We obtain memory-independent communication cost lower bounds on classical
and Strassen-based distributed-memory matrix multiplication algorithms. These
bounds imply that no classical or Strassen-based parallel matrix multiplication
algorithm can strongly scale perfectly beyond the ranges already attained by
the two parallel algorithms mentioned above. The memory-independent bounds and
the strong scaling bounds generalize to other algorithms.
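For reference, the bounds take the following form (as we recall them from
this line of work; the precise statements and constants are in the paper),
with \omega_0 = \log_2 7 \approx 2.81 for Strassen:

```latex
% Memory-independent bandwidth lower bounds: words some processor must
% communicate, independent of the local memory size M.
\[
  W_{\mathrm{classical}} = \Omega\!\left(\frac{n^{2}}{P^{2/3}}\right),
  \qquad
  W_{\mathrm{Strassen}} = \Omega\!\left(\frac{n^{2}}{P^{2/\omega_0}}\right).
\]
% Both decay slower than 1/P, so they eventually dominate the
% memory-dependent terms: beyond P = \Theta(n^3/M^{3/2}) (classical) and
% P = \Theta(n^{\omega_0}/M^{\omega_0/2}) (Strassen), perfect strong
% scaling is impossible.
```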
Improving the numerical stability of fast matrix multiplication
Fast algorithms for matrix multiplication, namely those that perform
asymptotically fewer scalar operations than the classical algorithm, have been
considered primarily of theoretical interest. Apart from Strassen's original
algorithm, few fast algorithms have been efficiently implemented or used in
practical applications. However, there exist many practical alternatives to
Strassen's algorithm with varying performance and numerical properties. Fast
algorithms are known to be numerically stable, but because their error bounds
are slightly weaker than those of the classical algorithm, they are not used even in
cases where they provide a performance benefit.
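The contrast is roughly the following, in the style of Higham's analysis (a
simplified sketch; the paper derives the precise constants). With unit
roundoff u, the classical algorithm admits a componentwise bound, while fast
algorithms admit only normwise bounds with larger polynomial factors:

```latex
% Classical algorithm: componentwise, with |.| applied entrywise.
\[
  |\widehat{C} - C| \;\le\; n\,u\,|A|\,|B| + O(u^{2}).
\]
% Fast algorithms: normwise in the max norm, where f(n) grows faster
% than n (for Strassen recursed to scalars, f(n) = O(n^{\log_2 12})).
\[
  \|\widehat{C} - C\|_{\max} \;\le\;
    f(n)\,u\,\|A\|_{\max}\,\|B\|_{\max} + O(u^{2}).
\]
```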
We argue in this paper that the numerical sacrifice of fast algorithms,
particularly for the typical use cases of practical algorithms, is not
prohibitive, and we explore ways to improve the accuracy both theoretically and
empirically. The numerical accuracy of fast matrix multiplication depends on
properties of the algorithm and of the input matrices, and we consider both
contributions independently. We generalize and tighten previous error analyses
of fast algorithms and compare their properties. We discuss algorithmic
techniques for improving the error guarantees from two perspectives:
manipulating the algorithms, and reducing input anomalies by various forms of
diagonal scaling. Finally, we benchmark performance and demonstrate our
improved numerical accuracy.
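As a concrete illustration of the diagonal-scaling idea, here is a minimal
sketch of one variant (outside scaling); the function name, the use of
max-magnitude scale factors, and the numpy formulation are ours, not the
authors' exact procedure:

```python
import numpy as np

def scaled_multiply(A, B, multiply=np.matmul):
    """Outside diagonal scaling, sketched: scale the rows of A and the
    columns of B to unit max magnitude, multiply (possibly with a fast
    algorithm passed as `multiply`), then undo the scaling. This tames
    the magnitude spread that normwise error bounds are sensitive to.
    """
    d_a = np.max(np.abs(A), axis=1)   # per-row scale factors for A
    d_b = np.max(np.abs(B), axis=0)   # per-column scale factors for B
    d_a[d_a == 0] = 1.0               # guard against zero rows/columns
    d_b[d_b == 0] = 1.0
    C_scaled = multiply(A / d_a[:, None], B / d_b[None, :])
    return d_a[:, None] * C_scaled * d_b[None, :]
```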
Parallelizing Gaussian Process Calculations in R
We consider parallel computation for Gaussian process calculations to overcome computational and memory constraints on the size of datasets that can be analyzed. Using a hybrid parallelization approach that combines threading (shared memory) and message-passing (distributed memory), we implement the core linear algebra operations used in spatial statistics and Gaussian process regression in an R package called bigGP that relies on C and MPI. The approach divides the covariance matrix into blocks such that the computational load is balanced across processes while communication between processes is limited. The package provides an API enabling R programmers to implement Gaussian process-based methods by using the distributed linear algebra operations without any C or MPI coding. We illustrate the approach and software by analyzing an astrophysics dataset with n = 67,275 observations.
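For orientation, the dense linear algebra being distributed is the standard
Gaussian-process likelihood computation; a serial numpy sketch of those core
operations (our illustration, not the bigGP API) is:

```python
import numpy as np

def gp_log_likelihood(K, y, noise_var):
    """Serial sketch of the core GP computation that bigGP distributes.
    The dominant costs -- the O(n^3) Cholesky factorization and the
    subsequent solves -- are the operations parallelized over blocks.
    """
    n = len(y)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))  # O(n^3) step
    alpha = np.linalg.solve(L, y)   # solve L alpha = y (triangular)
    # log N(y | 0, K + noise_var * I) via the Cholesky factor:
    return (-0.5 * alpha @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2.0 * np.pi))
```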
Communication-optimal parallel algorithm for Strassen's matrix multiplication
Parallel matrix multiplication is one of the most studied fundamental
problems in distributed and high performance computing. We obtain a new
parallel algorithm that is based on Strassen's fast matrix multiplication and
minimizes communication. The algorithm outperforms all known parallel matrix
multiplication algorithms, classical and Strassen-based, both asymptotically
and in practice.
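For readers unfamiliar with the ingredient, Strassen's seven-product
recursion is sketched below in serial form (the standard formulas; how the
seven products are mapped to processors is the paper's contribution and is
not captured here):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Serial Strassen recursion for square matrices with power-of-two
    dimension: 7 recursive products instead of 8, giving the
    O(n^{log2 7}) flop count that the parallel algorithm builds on.
    """
    n = A.shape[0]
    if n <= cutoff:                   # classical base case
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n), dtype=M1.dtype)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C
```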
A critical bottleneck in parallelizing Strassen's algorithm is the
communication between the processors. Ballard, Demmel, Holtz, and Schwartz
(SPAA'11) prove lower bounds on these communication costs, using expansion
properties of the underlying computation graph. Our algorithm matches these
lower bounds, and so is communication-optimal. It exhibits perfect strong
scaling within the maximum possible range.
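For context, the bound being matched takes the following form (as we recall
it from the SPAA'11 paper; \omega_0 = \log_2 7):

```latex
% Bandwidth cost: words communicated by some processor, with M words of
% local memory per processor.
\[
  W \;=\; \Omega\!\left(\max\!\left\{
      \frac{n^{\omega_0}}{P\,M^{\omega_0/2-1}},\;
      \frac{n^{2}}{P^{2/\omega_0}}
    \right\}\right), \qquad \omega_0 = \log_2 7.
\]
% The first, memory-dependent term scales as 1/P; matching it yields
% perfect strong scaling up to P = \Theta(n^{\omega_0}/M^{\omega_0/2}),
% after which the memory-independent term takes over.
```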
Benchmarking our implementation on a Cray XT4, we obtain speedups over
classical and Strassen-based algorithms ranging from 24% to 184% for a fixed
matrix dimension n=94080, where the number of nodes ranges from 49 to 7203.
Our parallelization approach generalizes to other fast matrix multiplication
algorithms.